Partial-monotone adaptive submodular maximization

Authors

Abstract

Many AI and machine learning problems require adaptively selecting a sequence of items, where each selected item might provide feedback that is valuable for making better selections in the future, with the goal of maximizing an adaptive submodular function. Most existing studies in this field focus on either the monotone case or the non-monotone case. Specifically, if the utility function is monotone and adaptive submodular, Golovin and Krause (J Artif Intell Res 42:427–486, 2011) developed a $$(1-1/e)$$ approximation solution subject to a cardinality constraint. For the cardinality-constrained non-monotone case, Tang (Theor Comput Sci 850:249–261, 2021) showed that a random greedy policy attains an approximation ratio of $$1/e$$. In this work, we generalize the above-mentioned results by studying the partial-monotone adaptive submodular maximization problem. To this end, we introduce the notion of a monotonicity ratio $$m\in [0,1]$$ to measure the degree of monotonicity of a function. Our main result shows that, subject to cardinality constraints, if the utility function has monotonicity ratio $$m$$ and is adaptive submodular, then a random greedy policy attains an approximation ratio of $$m(1-1/e)+(1-m)(1/e)$$. Notably, this recovers the aforementioned $$(1-1/e)$$ and $$1/e$$ ratios when $$m = 1$$ and $$m = 0$$, respectively. We further extend our results to a knapsack constraint and develop a $$(m+1)/10$$ approximation solution for this more general case. One important implication is that even for a non-monotone utility function, we can still attain an approximation ratio close to $$(1-1/e)$$ if the function is “close” to a monotone function. This leads to improved performance bounds for many machine learning applications whose utility functions are almost monotone.
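
For intuition, here is a minimal, non-adaptive sketch of the random greedy selection rule referenced above (in the style of Buchbinder et al.): in each of k rounds it samples uniformly at random from the (at most) k remaining elements with the largest marginal gain. The function name random_greedy, the toy coverage utility, and the omission of dummy-element padding and of any feedback model are illustrative assumptions; this is not the paper's adaptive policy.

import random

def random_greedy(ground_set, f, k):
    """Non-adaptive random greedy under a cardinality constraint.

    In each of k rounds, pick uniformly at random among the (at most) k
    remaining elements with the largest marginal gain. f is any set
    function mapping a Python set to a number. Simplification: no dummy
    elements are padded in when fewer than k candidates remain.
    """
    selected = set()
    for _ in range(k):
        remaining = [e for e in ground_set if e not in selected]
        if not remaining:
            break
        # Marginal gain of adding each remaining element to the current set.
        gains = {e: f(selected | {e}) - f(selected) for e in remaining}
        # Keep the k candidates with the largest gains, then pick one at random.
        top = sorted(remaining, key=lambda e: gains[e], reverse=True)[:k]
        selected.add(random.choice(top))
    return selected

# Toy coverage utility: f(S) = number of distinct items covered by S.
coverage = {"a": {1, 2}, "b": {2, 3}, "c": {4}, "d": {1, 4, 5}}
f = lambda S: len(set().union(*(coverage[e] for e in S))) if S else 0
print(random_greedy(set(coverage), f, k=2))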


Similar resources

Non-Monotone Adaptive Submodular Maximization

A wide range of AI problems, such as sensor placement, active learning, and network influence maximization, require sequentially selecting elements from a large set with the goal of optimizing the utility of the selected subset. Moreover, each element that is picked may provide stochastic feedback, which can be used to make smarter decisions about future selections. Finding efficient policies f...


Robust Monotone Submodular Function Maximization

Instances of monotone submodular function maximization with cardinality constraint occur often in practical applications. One example is feature selection in machine learning, where in many models, adding a new feature to an existing set of features always improves the modeling power (monotonicity) and the marginal benefit of adding a new feature decreases as we consider larger sets (submodular...


Constrained Maximization of Non-Monotone Submodular Functions

The problem of constrained submodular maximization has long been studied, with near-optimal results known under a variety of constraints when the submodular function is monotone. The case of non-monotone submodular maximization is not as well understood: the first approximation algorithms even for unconstrained maximization were given by Feige et al. [FMV07]. More recently, Lee et al. [LMNS09] ...


Maximization of Non-Monotone Submodular Functions

A litany of questions from a wide variety of scientific disciplines can be cast as non-monotone submodular maximization problems. Since this class of problems includes max-cut, it is NP-hard. Thus, general purpose algorithms for the class tend to be approximation algorithms. For unconstrained problem instances, one recent innovation in this vein includes an algorithm of Buchbinder et al. (2012)...


Monotone Submodular Maximization over a Matroid

In this talk, we survey some recent results on monotone submodular maximization over a matroid. The survey does not purport to be exhaustive.



Journal

Journal title: Journal of Combinatorial Optimization

Year: 2022

ISSN: 1573-2886, 1382-6905

DOI: https://doi.org/10.1007/s10878-022-00965-9